positive rate is always 100% and the false positive rate is 0%.
When an ROC curve approaches the diagonal line, a classifier makes
predictions by chance. In this scenario, the false positive rate equals the true
positive rate at all times. For instance, when the false positive rate is 10%
at some threshold, correspondingly, the true positive rate is also 10%. When
the true positive rate becomes 90%, the false positive rate is also 90%.
This means that when the false positive rate is controlled, the true positive rate
drops to the same level. When an ROC curve is below the diagonal
line, it is very likely that the predictions are on the opposite side
of the raw class labels. If this happens, a higher prediction value is
delivered by a model for a lower class label value and a lower prediction
value is delivered for a higher class label value. This happens when a
model is inappropriately designed.
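A minimal base-R sketch (with hypothetical toy vectors, not from the text) illustrates this symptom: scores that rank inversely to the labels give a true positive rate below the false positive rate, and reversing the scores, one common remedy, repairs the ranking.

```r
# Hypothetical scores that are inversely related to the class labels:
# higher scores tend to go with class 0, lower scores with class 1.
y     <- c(0, 0, 0, 1, 1, 1)
y.hat <- c(0.9, 0.8, 0.7, 0.3, 0.2, 0.1)

# TPR and FPR at a given threshold (predict class 1 when score >= t).
rates <- function(scores, labels, t) {
  pred <- as.integer(scores >= t)
  tpr  <- sum(pred == 1 & labels == 1) / sum(labels == 1)
  fpr  <- sum(pred == 1 & labels == 0) / sum(labels == 0)
  c(tpr = tpr, fpr = fpr)
}

rates(y.hat, y, 0.5)      # tpr = 0, fpr = 1: a point below the diagonal
rates(1 - y.hat, y, 0.5)  # tpr = 1, fpr = 0: reversed scores rank correctly
```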
Based on an ROC curve, the area under the ROC curve (AUC) is
defined, which is between zero and one. Using AUC as a measurement,
the qualitative robustness visualisation using an ROC curve turns into a
quantitative robustness analysis result.
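To make the AUC concrete, here is a hedged base-R sketch that computes it by the trapezoidal rule over the ROC points (the function name auc_trapezoid and the toy data are illustrative, not from any package):

```r
# AUC by the trapezoidal rule over (FPR, TPR) points, base R only.
auc_trapezoid <- function(scores, labels) {
  # Sweep thresholds from high to low so the FPR increases monotonically.
  ts  <- c(Inf, sort(unique(scores), decreasing = TRUE), -Inf)
  fpr <- sapply(ts, function(t) sum(scores >= t & labels == 0) / sum(labels == 0))
  tpr <- sapply(ts, function(t) sum(scores >= t & labels == 1) / sum(labels == 1))
  sum(diff(fpr) * (head(tpr, -1) + tail(tpr, -1)) / 2)
}

y <- c(0, 0, 1, 1)
auc_trapezoid(c(0.1, 0.4, 0.35, 0.8), y)  # imperfect ranking: AUC = 0.75
auc_trapezoid(c(0.1, 0.2, 0.8, 0.9), y)   # perfect ranking: AUC = 1
```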
ROC analysis has a very wide application in biological/medical pattern analysis.
For instance, ROC has been used to evaluate the performance of
a classification model for assessing the prognostic function of EEG in
cardiac arrest practice [Barbella, et al., 2020] and to predict postoperative
pancreatic fistula for patients who suffer from distal pancreatectomy when
the geriatric nutritional risk index is used [Funamizu, et al., 2020].
One of the widely used R packages for ROC analysis is ROCR. To do
an ROC analysis for a data set using this package, two main functions are
needed: the prediction
function and the performance function. The former is to establish an
ROC model. The latter is to extract data for drawing an ROC curve, for
calculating the accuracy, etc. Suppose two vectors y and y.hat (ŷ) are ready;
an ROC model can be established using the following code,
my.roc <- prediction(y.hat, y)
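As a hedged sketch, assuming the ROCR package is installed and substituting hypothetical toy vectors for real data, the AUC discussed earlier can be extracted from such a model with the performance function:

```r
library(ROCR)

# Hypothetical toy data standing in for y and y.hat.
y     <- c(0, 0, 1, 0, 1, 1, 1, 0)
y.hat <- c(0.2, 0.4, 0.7, 0.3, 0.8, 0.6, 0.9, 0.65)

my.roc <- prediction(y.hat, y)

# "auc" is one of many measures the performance function supports;
# the result sits in the y.values slot of the returned S4 object.
my.auc <- performance(my.roc, "auc")@y.values[[1]]
my.auc  # a single number between zero and one
```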
The ROC model my.roc contains much useful information that is
important for further analysis. To draw an ROC curve, a false